
    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Microservices architectures combine fine-grained, independently scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on the users' connection to the microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to place complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split with only minimal changes to a legacy microservices application. Locality awareness using network coordinates further enables service splits to migrate automatically and follow the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
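    To make the discovery step concrete, below is a minimal sketch of how a middleware in the spirit of Koala might resolve a REST call to the nearest split of a stateful service using network coordinates. All names (Split, SplitRegistry, the coordinate values) are hypothetical illustrations under assumed semantics, not the paper's actual API.

```python
# A minimal sketch, assuming splits are registered with network coordinates
# and the portion of service state they own. Hypothetical names throughout.
import math

class Split:
    def __init__(self, site, coords, state_keys):
        self.site = site              # core or edge site hosting this split
        self.coords = coords          # network coordinates (e.g. Vivaldi-style)
        self.state_keys = state_keys  # portion of the service state it owns

def distance(a, b):
    # Euclidean distance between two coordinate points, used as a latency estimate.
    return math.dist(a, b)

class SplitRegistry:
    def __init__(self):
        self.splits = {}  # service name -> list of splits

    def register(self, service, split):
        self.splits.setdefault(service, []).append(split)

    def resolve(self, service, key, client_coords):
        # Keep only splits owning the requested state, then pick the one
        # whose network coordinates are closest to the client.
        owners = [s for s in self.splits[service] if key in s.state_keys]
        return min(owners, key=lambda s: distance(s.coords, client_coords))

registry = SplitRegistry()
registry.register("doc-store", Split("core-paris", (0.0, 0.0), {"doc-1"}))
registry.register("doc-store", Split("edge-rennes", (0.1, 0.3), {"doc-2"}))
# A client near the edge site is directed to the local split.
print(registry.resolve("doc-store", "doc-2", (0.12, 0.28)).site)  # edge-rennes
```

    The design point this sketch captures is that resolution combines two filters: which splits own the requested state, and which owner is closest to the caller.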

    Lazy and locality-preserving base components for managing a fog infrastructure: application to service discovery

    In the last decade, cloud computing has grown to become the standard deployment environment for most distributed applications. While cloud providers have continuously extended their coverage to different locations worldwide, the distance of their datacenters to the end users still often translates into significant latency and network utilization. With the advent of new families of applications such as virtual/augmented reality and self-driving vehicles, which require very low latency, or the IoT, which generates enormous amounts of data, the current centralized cloud infrastructure has proven unable to support their stringent requirements. This has shifted the focus to more distributed alternatives such as fog computing. Although the promise of such an infrastructure seems auspicious, a standard fog management platform has yet to emerge. Consequently, significant attention is dedicated to capturing the right design requirements for delivering on that promise. In this dissertation, we aim to design building blocks which provide basic functionalities for fog management tasks. Starting from the basic fog principle of preserving locality, we design a lazy and locality-aware overlay network called Koala, which provides efficient decentralized management without introducing additional traffic overhead. In order to capture additional requirements originating from the application layer, we port a well-known microservices-based application, namely ShareLatex, to a fog environment. We examine how its performance is affected and what functionalities the management layer can provide in order to facilitate its fog deployment and improve its performance. Based on our overlay building block and the requirements retrieved from the fog deployment of the application, we design a service discovery mechanism which satisfies those requirements, and we integrate these components into a single prototype. This full-stack prototype enables a complete end-to-end evaluation of these components based on real use-case scenarios.

    Designing overlay networks for decentralized cloud platforms

    Recent increases in demand for next-to-source data processing and low-latency applications have shifted attention from the traditional centralized cloud to more distributed models such as edge computing. In order to fully leverage these models, it is necessary to decentralize not only the computing resources but also their management. While a decentralized cloud has various inherent advantages, it also introduces new challenges with respect to coordination and collaboration between resources. A large-scale system with multiple administrative entities requires an overlay network which enables data and service localization based only on a partial view of the network. Numerous existing overlay networks target different properties, but they are built in a generic context, without taking into account the specific requirements of a decentralized cloud. In this paper we identify some of these requirements and introduce Koala, a novel overlay network designed specifically to meet them.

    Koala: Towards Lazy and Locality-Aware Overlays for Decentralized Clouds

    Current cloud computing infrastructures and their management are highly centralized, and therefore suffer from limitations in terms of network latency, energy consumption, and possible legal restrictions. Decentralizing the cloud has recently been proposed as an alternative. However, the efficient management of a geographically dispersed platform brings new challenges related to service localization, network utilization, and locality-awareness. We consider a cloud topology composed of many small datacenters geographically dispersed within the backbone network. In this paper, we present the design, development, and experimental validation of Koala, a novel overlay network that specifically targets such a geographically distributed cloud platform. The three key characteristics of Koala are laziness, latency-awareness, and topology-awareness. By using application traffic, Koala maintains the overlay lazily while taking locality into account in each routing decision. Although Koala's performance depends on application traffic, we show through simulation experiments that, for uniformly distributed traffic, Koala delivers similar routing complexity and reduced latency compared to a traditional proactive protocol such as Chord. Additionally, we show that despite its passive maintenance, Koala can appropriately deal with churn by keeping the number of routing failures low, without significantly degrading routing performance. Finally, we show how such an overlay adapts to a decentralized cloud composed of multiple small datacenters.
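    As a rough illustration of the laziness and locality-awareness described above, here is a sketch of greedy ring routing in which routing state is learned from application traffic rather than dedicated maintenance messages, and each hop trades identifier progress against estimated latency. The identifier space, the scoring rule, and all names are assumptions for illustration, not Koala's actual algorithm.

```python
# A minimal sketch, assuming a Chord-like identifier ring and a hypothetical
# progress-per-latency heuristic for choosing the next hop.
RING = 2 ** 16  # size of the identifier space (assumed)

def id_distance(a, b):
    # Clockwise distance on the identifier ring.
    return (b - a) % RING

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = {}  # node_id -> estimated latency (ms)

    def learn(self, node_id, latency_ms):
        # Lazy maintenance: piggyback neighbor information on application
        # messages instead of sending periodic maintenance traffic.
        self.neighbors[node_id] = latency_ms

    def next_hop(self, target_id):
        # Among neighbors that make progress toward the target, prefer the
        # one with the best progress-per-latency trade-off.
        def progress(n):
            return id_distance(self.node_id, target_id) - id_distance(n, target_id)
        candidates = [n for n in self.neighbors if progress(n) > 0]
        if not candidates:
            return None  # we are the closest node we know of
        return max(candidates, key=lambda n: progress(n) / (1 + self.neighbors[n]))

n = Node(100)
n.learn(4000, 5.0)    # close in latency, modest identifier progress
n.learn(30000, 80.0)  # far in latency, large identifier progress
print(n.next_hop(32000))  # 4000: the nearby hop wins the trade-off
```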

    ShareLatex on the Edge: Evaluation of the Hybrid Core/Edge Deployment of a Microservices-based Application

    Collaborative web applications benefit from good responsiveness. This can be difficult to achieve with deployments in core data centers subject to high network latencies. Hybrid deployments using a mix of core and edge resources closer to end users are a promising alternative. Many challenges are associated with hybrid deployments of applications, from their decomposition into components that can be replicated dynamically onto edge resources to the management and consistency of these components' state. We report on our experience with the hybrid deployment of ShareLatex, a legacy collaborative web application. We show how its design, based on the use of microservices and resource-oriented APIs, allows for an efficient modular decomposition. We detail how we adapted the application configuration for a hybrid deployment with no modification to its source code. Our experiments using a fleet of emulated users show that a hybrid deployment of this legacy collaborative application can decrease user-perceived latencies for common operations, at the cost of increasing them for operations involving core/edge coordination traffic.
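    The configuration-only adaptation described above can be pictured as follows: a placement map assigns each microservice an endpoint at either a core or an edge site, and that map is rendered into environment variables so the source code stays untouched. The service names echo ShareLatex's microservice decomposition, but the hosts, ports, and the placement itself are illustrative assumptions, not the deployment used in the paper.

```python
# A minimal sketch, assuming a containerized deployment configured entirely
# through endpoint environment variables. Hosts and placement are hypothetical.
PLACEMENT = {
    # latency-sensitive, per-document services replicated at the edge
    "document-updater": "http://edge-site-1:3003",
    "real-time":        "http://edge-site-1:3026",
    # stateful or shared services kept in the core data center
    "web":              "http://core-dc:3000",
    "mongodb":          "mongodb://core-dc:27017",
}

def env_for(service, placement):
    # Render the endpoint map as environment variables, a common way to
    # reconfigure a containerized service without touching its source code.
    return {
        f"{name.upper().replace('-', '_')}_URL": url
        for name, url in placement.items()
        if name != service
    }

for var, url in sorted(env_for("real-time", PLACEMENT).items()):
    print(f"{var}={url}")
```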

    Kangaroo: A Tenant-Centric Software-Defined Cloud Infrastructure

    Applications on cloud infrastructures acquire virtual machines (VMs) from providers when necessary. The current interface for acquiring VMs from most providers, however, is too limiting for tenants, both in the granularity at which VMs can be acquired (e.g., small, medium, large) and in the very limited control it offers over their placement. The former leads to VM underutilization and the latter has performance implications, both translating into higher costs for the tenants. In this work, we leverage nested virtualization and a networking overlay to tackle these problems. We present Kangaroo, an OpenStack-based virtual infrastructure provider, and IPOPsm, a virtual networking switch for communication between nested VMs running on different infrastructure VMs. In addition, we design and implement Skippy, the realization of our proposed virtual infrastructure API for programming Kangaroo. Our benchmarks show that through careful mapping of nested VMs to infrastructure VMs, Kangaroo achieves up to an order of magnitude better performance at only half the cost on Amazon EC2. Further, Kangaroo's unified OpenStack API allows us to migrate an entire application between Amazon EC2 and our local OpenNebula deployment within a few minutes, without any downtime or modification to the application code.
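    The tenant-side idea of mapping fine-grained nested VMs onto a few large infrastructure VMs can be sketched as a simple packing problem. The first-fit policy and all names below are illustrative assumptions, not Kangaroo's actual placement logic.

```python
# A minimal sketch, assuming nested VMs are characterized only by CPU and
# memory demands. First-fit packing onto large infrastructure VMs avoids
# the underutilization of acquiring one provider VM per component.
class InfraVM:
    def __init__(self, name, cpus, mem_gb):
        self.name, self.free_cpus, self.free_mem = name, cpus, mem_gb
        self.nested = []

    def fits(self, cpus, mem_gb):
        return self.free_cpus >= cpus and self.free_mem >= mem_gb

    def place(self, nested_name, cpus, mem_gb):
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.nested.append(nested_name)

def first_fit(infra_vms, requests):
    # Place each nested-VM request on the first infrastructure VM with room;
    # colocated nested VMs communicate without crossing the provider network.
    for name, cpus, mem in requests:
        host = next((vm for vm in infra_vms if vm.fits(cpus, mem)), None)
        if host is None:
            raise RuntimeError(f"no capacity for {name}; acquire another infra VM")
        host.place(name, cpus, mem)

fleet = [InfraVM("ec2-xlarge-1", cpus=4, mem_gb=16)]
first_fit(fleet, [("web", 1, 2), ("db", 2, 8), ("cache", 1, 2)])
print(fleet[0].nested)  # all three nested VMs colocated on one infra VM
```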
